Results 1 - 20 of 16,138
1.
J Clin Epidemiol ; 165: 111189, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38613246

ABSTRACT

OBJECTIVES: To provide guidance on rating imprecision in a body of evidence assessing the accuracy of a single test. This guide clarifies when Grading of Recommendations Assessment, Development and Evaluation (GRADE) users should consider rating down the certainty of evidence by one or more levels for imprecision in test accuracy. STUDY DESIGN AND SETTING: A project group within the GRADE working group conducted iterative discussions and presentations at GRADE working group meetings to produce this guidance. RESULTS: Before rating the certainty of evidence, GRADE users should define the target of their certainty rating. GRADE recommends setting judgment thresholds defining what users consider a very accurate, accurate, inaccurate, and very inaccurate test. These thresholds should be set after considering the consequences of testing and effects on people-important outcomes. GRADE's primary criterion for judging imprecision in test accuracy evidence is the confidence interval (CI approach) of absolute test accuracy results (true and false positive and negative results in a cohort of people). Under the CI approach, when a CI appreciably crosses the predefined judgment threshold(s), one should consider rating down the certainty of evidence by one or more levels, depending on the number of thresholds crossed. When the CI does not cross a judgment threshold, GRADE suggests considering the sample size needed for an adequately powered test accuracy review (the optimal information size [OIS] or review information size [RIS]) when rating imprecision. If the combined sample size of the studies included in the review is smaller than the required OIS/RIS, one should consider rating down by one or more levels for imprecision.
CONCLUSION: This paper extends previous GRADE guidance for rating imprecision in single test accuracy systematic reviews and guidelines, with a focus on the circumstances in which one should consider rating down one or more levels for imprecision.
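The CI-based rating logic described above can be sketched in a few lines of Python; the thresholds and per-1000 results below are hypothetical illustrations, not GRADE-endorsed values:

```python
# Minimal sketch of the GRADE CI approach to imprecision, assuming accuracy
# results expressed per 1000 people tested and pre-set judgment thresholds.
# All numbers are hypothetical illustrations.

def levels_to_rate_down(ci_low, ci_high, thresholds):
    """Count how many judgment thresholds the confidence interval crosses.

    thresholds: boundaries (per 1000) between 'very accurate', 'accurate',
    'inaccurate', and 'very inaccurate' judgments, set before rating.
    """
    crossed = sum(1 for t in thresholds if ci_low < t < ci_high)
    return crossed  # candidate number of levels to rate down for imprecision

# True positives per 1000 tested: 95% CI 820-910, thresholds at 850 and 900.
print(levels_to_rate_down(820, 910, [850, 900]))  # CI crosses both -> 2
```

When no threshold is crossed, the abstract's OIS/RIS criterion applies instead: compare the review's combined sample size against the required information size.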


Subjects
GRADE Approach, Group Processes, Humans, Judgment, Sample Size
2.
Neurosurg Rev ; 47(1): 158, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625445

ABSTRACT

This critique provides a critical analysis of the outcomes following occipito-cervical fusion in patients with Ehlers-Danlos syndromes (EDS) and craniocervical instability. The study examines the efficacy of the surgical intervention and evaluates its impact on patient outcomes. While the article offers valuable insights into the management of EDS-related craniocervical instability, several limitations and areas for improvement are identified, including sample size constraints, the absence of a control group, and the need for long-term follow-up data. Future research efforts should focus on addressing these concerns to optimize treatment outcomes for individuals with EDS.


Subjects
Publications, Spinal Fusion, Humans, Sample Size
4.
J Am Heart Assoc ; 13(8): e034115, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38606770

ABSTRACT

BACKGROUND: We performed a review of acute stroke trials to determine features associated with premature termination of trial enrollment, defined by the authors as not meeting preplanned sample size. METHODS AND RESULTS: MEDLINE was searched for randomized clinical stroke trials published in 9 major clinical journals between 2013 and 2022. We included randomized clinical trials that were phase 2 or 3 with a preplanned sample size ≥100 and a time-to-treatment within 24 hours of onset for transient ischemic attack, ischemic stroke, or intracerebral hemorrhage. Data were abstracted on trial features including trial design, inclusion criteria, imaging, location and number of sites, masking, treatment complexity, control group (standard therapy, placebo), industry involvement, and preplanned stopping rules (futility and efficacy). Least absolute shrinkage and selection operator regression was used to select the most important factors associated with premature termination; then, a multivariable logistic regression was fit including only the least absolute shrinkage and selection operator selected variables. Of 1475 studies assessed, 98 trials met eligibility criteria. Forty-five (46%) trials were prematurely terminated, of which 27% were stopped for benefit/efficacy, 20% for lack of money/slow enrollment, 18% for futility, 16% for newly available evidence, 17% for other reasons, and 4% due to harm. Complex trials (adjusted odds ratio [aOR], 2.76 [95% CI, 1.13-7.49]), presence of a futility rule (aOR, 4.43 [95% CI, 1.62-17.91]), and exclusion of prestroke dependency (none/slight disability only; aOR, 2.19 [95% CI, 0.84-6.72] versus dependency allowed) were identified as the strongest predictors. CONCLUSIONS: Nearly half of acute stroke trials were terminated prematurely. 
Broadening inclusion criteria and simplifying trial design may decrease the likelihood of unplanned termination, whereas planned futility analyses may appropriately terminate trials early, saving money and resources.


Subjects
Transient Ischemic Attack, Ischemic Stroke, Stroke, Humans, Stroke/therapy, Stroke/drug therapy, Cerebral Hemorrhage, Sample Size
5.
Lasers Med Sci ; 39(1): 98, 2024 Apr 07.
Article in English | MEDLINE | ID: mdl-38583109

ABSTRACT

AIM: The aim of the present study was to evaluate the efficacy of a 30°-angled Er:YAG laser tip and different periodontal instruments on root surface roughness and morphology in vitro. METHODS: Eighteen bovine tooth roots without carious lesions were decoronated at the cementoenamel junction and separated longitudinally. The 36 resulting blocks were mounted in resin and polished with silicon carbide papers under water irrigation, then randomly assigned to 3 treatment groups. In Group 1, a 30°-angled Er:YAG laser (2.94 µm) tip was applied to the blocks at 20 Hz and 120 mJ energy output under water irrigation for 20 s. In Groups 2 and 3, the blocks were treated with a new-generation ultrasonic tip and a conventional curette, respectively, applied apico-coronally for 20 s with a sweeping motion. Surface roughness and morphology were evaluated before and after instrumentation with a profilometer and SEM, respectively. RESULTS: After instrumentation, profilometric analysis revealed significantly higher roughness values compared to baseline in all treatment groups (p < 0.05). The laser group showed the roughest surface morphology, followed by the conventional curette and new-generation ultrasonic tip groups (p < 0.05). In SEM analysis, irregular surfaces and crater defects were seen more frequently in the laser group. CONCLUSION: The results showed that the new-generation ultrasonic tip was associated with smoother surface morphology than the 30°-angled Er:YAG laser tip and the conventional curette. Further in vitro and in vivo studies with increased sample sizes are necessary to support these findings.


Subjects
Solid-State Lasers, Animals, Cattle, Solid-State Lasers/therapeutic use, Research Design, Sample Size, Tooth Cervix, Water
6.
J Exp Psychol Gen ; 153(4): 1139-1151, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38587935

ABSTRACT

The calculation of statistical power has been taken up as a simple yet informative tool to assist in designing an experiment, particularly in justifying sample size. A difficulty with using power for this purpose is that the classical power formula does not incorporate sources of uncertainty (e.g., sampling variability) that can impact the computed power value, leading to a false sense of precision and confidence in design choices. We use simulations to demonstrate the consequences of adding two common sources of uncertainty to the calculation of power. Sampling variability in the estimated effect size (Cohen's d) can introduce a large amount of uncertainty (e.g., sometimes producing rather flat distributions) in power and sample-size determination. The addition of random fluctuations in the population effect size can cause values of its estimates to take on a sign opposite the population value, making calculated power values meaningless. These results suggest that calculated power values or use of such values to justify sample size add little to planning a study. As a result, researchers should put little confidence in power-based choices when planning future studies.
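The first simulation described above, sampling variability in the estimated Cohen's d flattening the distribution of computed power, can be reproduced in a few lines; the power formula is a directional normal approximation, and the pilot size, planned size, and true effect are illustrative assumptions, not values from the article:

```python
import math
import random

# Plug noisy estimates of Cohen's d into a power formula and look at the
# distribution of the resulting power values (directional normal
# approximation: wrong-signed effects count as failures to reject).

def power_two_sample(d, n_per_group):
    z_crit = 1.959963984540054                 # z for two-sided alpha = 0.05
    ncp = d * math.sqrt(n_per_group / 2)
    return 0.5 * (1 + math.erf((ncp - z_crit) / math.sqrt(2)))

random.seed(1)
d_true, n_pilot, n_planned = 0.5, 20, 64
se_d = math.sqrt(2 / n_pilot)                  # rough SE of d from a small pilot
powers = sorted(power_two_sample(random.gauss(d_true, se_d), n_planned)
                for _ in range(5000))
print(f"power at the point estimate: {power_two_sample(d_true, n_planned):.2f}")
print(f"90% interval of computed power: ({powers[250]:.2f}, {powers[4750]:.2f})")
```

A single number near 0.8 hides an interval that can span most of (0, 1), which is the flatness the abstract warns about.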


Subjects
Uncertainty, Humans, Sample Size
7.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38591365

ABSTRACT

A spatial sampling design determines where sample locations are placed in a study area so that population parameters can be estimated with relatively high precision. If the response variable has spatial trends, spatially balanced or well-spread designs give precise results for commonly used estimators. This article proposes a new method that draws well-spread samples over arbitrary auxiliary spaces and can be used for master sampling applications. All we require is a measure of the distance between population units. Numerical results show that the method generates well-spread samples and compares favorably with existing designs. We provide an example application using several auxiliary variables to estimate total aboveground biomass over a large study area in Eastern Amazonia, Brazil. Multipurpose surveys are also considered, where the totals of aboveground biomass, primary production, and clay content (3 responses) are estimated from a single well-spread sample over the auxiliary space.
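The "all we require is a distance between population units" idea can be illustrated with a simple max-min heuristic; this farthest-point selection is a stand-in for intuition only, not the design proposed in the article (a proper sampling design must also control inclusion probabilities):

```python
import random

# Farthest-point (max-min distance) selection: an illustrative way to draw a
# well-spread sample when all that is available is a distance between units.
# This is a stand-in heuristic, not the article's method.

def well_spread_sample(units, k, dist, seed=0):
    rng = random.Random(seed)
    chosen = [rng.choice(units)]
    while len(chosen) < k:
        # add the unit whose nearest already-chosen unit is farthest away
        nxt = max((u for u in units if u not in chosen),
                  key=lambda u: min(dist(u, c) for c in chosen))
        chosen.append(nxt)
    return chosen

# Units placed in an auxiliary space (1-D values standing in for several
# auxiliary variables), with absolute difference as the distance.
units = [i / 100 for i in range(100)]
sample = well_spread_sample(units, 5, lambda a, b: abs(a - b))
print(sorted(sample))
```

The selected points end up roughly evenly spaced over the auxiliary range, which is the "well-spread" property the estimators rely on.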


Subjects
Sample Size, Surveys and Questionnaires
8.
Br J Math Stat Psychol ; 77(2): 289-315, 2024 May.
Article in English | MEDLINE | ID: mdl-38591555

ABSTRACT

Popular statistical software provides the Bayesian information criterion (BIC) for multi-level models or linear mixed models. However, it has been observed that the combination of statistical literature and software documentation has led to discrepancies in the formulas of the BIC and uncertainties as to the proper use of the BIC in selecting a multi-level model with respect to level-specific fixed and random effects. These discrepancies and uncertainties result from different specifications of sample size in the BIC's penalty term for multi-level models. In this study, we derive the BIC's penalty term for level-specific fixed- and random-effect selection in a two-level nested design. In this new version of BIC, called BIC_E1, this penalty term is decomposed into two parts if the random-effect variance-covariance matrix has full rank: (a) a term with the log of average sample size per cluster and (b) the total number of parameters times the log of the total number of clusters. Furthermore, we derive the new version of BIC, called BIC_E2, in the presence of redundant random effects. We show that the derived formulae, BIC_E1 and BIC_E2, adhere to empirical values via numerical demonstration and that BIC_E (E indicating either E1 or E2) is the best global selection criterion, as it performs at least as well as BIC with the total sample size and BIC with the number of clusters across various multi-level conditions through a simulation study. In addition, the use of BIC_E1 is illustrated with a textbook example dataset.
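The practical issue, different "sample sizes" entering the p·log(n) penalty, can be seen with the ordinary BIC formula; the log-likelihood and counts below are illustrative, and the derivations of BIC_E1/BIC_E2 themselves are beyond this sketch:

```python
import math

# Ordinary BIC with an explicit "effective sample size" argument, to show how
# the choice of n in the penalty changes the criterion in multi-level settings.

def bic(loglik, n_params, n_effective):
    return -2.0 * loglik + n_params * math.log(n_effective)

loglik, p = -1234.5, 6
n_total, n_clusters = 1000, 50          # e.g., 50 clusters of 20 units each
print(bic(loglik, p, n_total))          # penalty uses the total sample size
print(bic(loglik, p, n_clusters))       # penalty uses the number of clusters
```

The abstract's BIC_E1 sits between these two extremes: part of the penalty scales with the log of the average cluster size and part with the log of the number of clusters.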


Subjects
Software, Sample Size, Bayes Theorem, Linear Models, Computer Simulation
9.
Bioinformatics ; 40(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38569898

ABSTRACT

MOTIVATION: Research is improving our understanding of how the microbiome interacts with the human body and its impact on human health. Existing machine learning methods have shown great potential in discriminating healthy from diseased microbiome states. However, machine learning based prediction using microbiome data faces challenges such as small sample size, imbalance between cases and controls, and the high cost of collecting a large number of samples. To address these challenges, we propose a deep learning framework, phylaGAN, to augment existing datasets with generated microbiome data using a combination of a conditional generative adversarial network (C-GAN) and an autoencoder. Conditional generative adversarial networks train two models against each other to compute larger simulated datasets that are representative of the original dataset. The autoencoder maps the original and the generated samples onto a common subspace to make the prediction more accurate. RESULTS: Extensive evaluation and predictive analysis were conducted on two datasets, a T2D study and a cirrhosis study, showing an improvement in mean AUC using data augmentation of 11% and 5%, respectively. External validation on a cohort classifying between obese and lean subjects, with a smaller sample size, provided an improvement in mean AUC close to 32% when augmented through phylaGAN compared to using the original cohort. Our findings not only indicate that generative adversarial networks can create samples that mimic the original data across various diversity metrics, but also highlight the potential of enhancing disease prediction through machine learning models trained on synthetic data. AVAILABILITY AND IMPLEMENTATION: https://github.com/divya031090/phylaGAN.


Subjects
Benchmarking, Microbiota, Humans, Machine Learning, Sample Size
10.
Biom J ; 66(3): e2300240, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637304

ABSTRACT

Rank methods are well-established tools for comparing two or multiple (independent) groups. However, statistical planning methods for computing the required sample size(s) to detect a specific alternative with predefined power are lacking. In the present paper, we develop numerical algorithms for sample size planning of pseudo-rank-based multiple contrast tests. We discuss the treatment effects and different ways to approximate variance parameters within the estimation scheme. We further compare pairwise with global rank methods in detail. Extensive simulation studies show that the sample size estimators are accurate. A real data example illustrates the application of the methods.


Subjects
Algorithms, Statistical Models, Sample Size, Computer Simulation
11.
Biom J ; 66(3): e2300175, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637326

ABSTRACT

In screening large populations a diagnostic test is frequently used repeatedly. An example is screening for bowel cancer using the fecal occult blood test (FOBT) on several occasions, such as on 3 or 6 days. The question addressed here is how often we should repeat a diagnostic test when screening for a specific medical condition. Sensitivity is often used as a performance measure of a diagnostic test and is considered here both for the individual application of the test and for the overall screening procedure. The latter can involve an increasingly large number of repeated applications, but how many are sufficient? We demonstrate the issues involved in answering this question using real data on bowel cancer from St Vincent's Hospital in Sydney. As data are only available for those testing positive at least once, an appropriate modeling technique is developed on the basis of the zero-truncated binomial distribution, which allows for population heterogeneity. The latter is modeled using discrete nonparametric maximum likelihood. If we wish to achieve an overall sensitivity of 90%, the FOBT should be repeated for 2 weeks instead of the 1 week that was used at the time of the survey. A simulation study also shows consistency in the sense that the bias and standard deviation of the estimated sensitivity decrease with an increasing number of repeated occasions as well as with increasing sample size.
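The core arithmetic of repeated testing, before the zero-truncation and heterogeneity modeling, is simple to sketch; the mixture weights and per-occasion sensitivities below are illustrative, not the estimates from the St Vincent's data:

```python
# Overall sensitivity after k repeats of a test, assuming independence across
# occasions: 1 - (1 - p)^k per person. Population heterogeneity is modeled as
# a discrete mixture of per-occasion sensitivities (illustrative values).

def overall_sensitivity(k, components):
    """components: list of (weight, per-occasion sensitivity) pairs."""
    return sum(w * (1 - (1 - p) ** k) for w, p in components)

homogeneous = [(1.0, 0.30)]
mixture = [(0.6, 0.50), (0.4, 0.10)]    # includes a harder-to-detect subgroup

for k in (3, 6, 12):
    print(k, round(overall_sensitivity(k, homogeneous), 3),
          round(overall_sensitivity(k, mixture), 3))
```

Under heterogeneity the overall sensitivity climbs more slowly with k, which is why a model allowing for heterogeneity can recommend more repeat occasions than a naive homogeneous calculation would.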


Subjects
Colorectal Neoplasms, Humans, Colorectal Neoplasms/diagnosis, Occult Blood, Sample Size, Routine Diagnostic Tests, Mass Screening/methods
12.
BMJ Open ; 14(4): e077132, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38626966

ABSTRACT

OBJECTIVE: International trials can be challenging to operationalise due to incompatibilities between country-specific policies and infrastructures. The aim of this systematic review was to identify the operational complexities of conducting international trials and identify potential solutions for overcoming them. DESIGN: Systematic review. DATA SOURCES: Medline, Embase and Health Management Information Consortium were searched from 2006 to 30 January 2023. ELIGIBILITY CRITERIA: All studies reporting operational challenges (eg, site selection, trial management, intervention management, data management) of conducting international trials were included. DATA EXTRACTION AND SYNTHESIS: Search results were independently screened by at least two reviewers and data were extracted into a proforma. RESULTS: 38 studies (35 RCTs, 2 reports and 1 qualitative study) fulfilled the inclusion criteria. The median sample size was 1202 (IQR 332-4056) and median number of sites was 40 (IQR 13-78). 88.6% of studies had an academic sponsor and 80% were funded through government sources. Operational complexities were particularly reported during trial set-up due to lack of harmonisation in regulatory approvals and in relation to sponsorship structure, with associated budgetary impacts. Additional challenges included site selection, staff training, lengthy contract negotiations, site monitoring, communication, trial oversight, recruitment, data management, drug procurement and distribution, pharmacy involvement and biospecimen processing and transport. CONCLUSIONS: International collaborative trials are valuable in cases where recruitment may be difficult, diversifying participation and applicability. However, multiple operational and regulatory challenges are encountered when implementing a trial in multiple countries. 
Careful planning and communication between trials units and investigators, with an emphasis on establishing adequately resourced cross-border sponsorship structures and regulatory approvals, may help to overcome these barriers and realise the benefits of the approach. OPEN SCIENCE FRAMEWORK REGISTRATION NUMBER: osf-registrations-yvtjb-v1.


Subjects
Pharmacy, Humans, Sample Size, Budgets
13.
PeerJ ; 12: e17128, 2024.
Article in English | MEDLINE | ID: mdl-38562994

ABSTRACT

Background: Interaction identification is important in epidemiological studies and can be detected by including a product term in the model. However, as Rothman noted, a product term in exponential models may be regarded as multiplicative rather than additive to better reflect biological interactions. Currently, the additive interaction is largely measured by the relative excess risk due to interaction (RERI), the attributable proportion due to interaction (AP), and the synergy index (S), and confidence intervals are developed via frequentist approaches. However, few studies have focused on the same issue from a Bayesian perspective. The present study aims to provide a Bayesian view of the estimation and credible intervals of the additive interaction measures. Methods: Bayesian logistic regression was employed, and estimates and credible intervals were calculated from posterior samples of the RERI, AP and S. Since Bayesian inference depends only on posterior samples, it is very easy to apply this method to preventive factors. The validity of the proposed method was verified by comparing the Bayesian method with the delta and bootstrap approaches in simulation studies with example data. Results: In all the simulation studies, the Bayesian estimates were very close to the corresponding true values. Due to the skewness of the interaction measures, compared with the confidence intervals of the delta method, the credible intervals of the Bayesian approach were more balanced and matched the nominal 95% level. Compared with the bootstrap method, the Bayesian method appeared to be a competitive alternative and fared better when small sample sizes were used. Conclusions: The proposed Bayesian method is a competitive alternative to other methods. This approach can assist epidemiologists in detecting additive-scale interactions.
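The posterior-sample mechanics described above are straightforward to sketch: compute RERI, AP, and S for each posterior draw of the odds ratios, then take quantiles as credible intervals. The Gaussian draws below are simulated stand-ins for a fitted Bayesian logistic regression, with illustrative log-OR values:

```python
import math
import random

# RERI = OR11 - OR10 - OR01 + 1, AP = RERI / OR11,
# S = (OR11 - 1) / (OR10 + OR01 - 2), computed per posterior draw.

def interaction_measures(or10, or01, or11):
    reri = or11 - or10 - or01 + 1
    ap = reri / or11
    s = (or11 - 1) / (or10 + or01 - 2)
    return reri, ap, s

random.seed(7)
draws = [interaction_measures(math.exp(random.gauss(0.6, 0.1)),
                              math.exp(random.gauss(0.4, 0.1)),
                              math.exp(random.gauss(1.4, 0.1)))
         for _ in range(4000)]
reri = sorted(d[0] for d in draws)
print(f"RERI posterior median {reri[2000]:.2f}, "
      f"95% CrI ({reri[100]:.2f}, {reri[3900]:.2f})")
```

Because the interval comes directly from the (skewed) posterior of RERI, it needs no delta-method symmetry assumption, which is the advantage the abstract reports.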


Subjects
Bayes Theorem, Computer Simulation, Logistic Models, Epidemiologic Studies, Sample Size
14.
Biom J ; 66(3): e2300094, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38581099

ABSTRACT

Conditional power (CP) serves as a widely utilized approach for futility monitoring in group sequential designs. However, adopting the CP methods may lead to inadequate control of the type II error rate at the desired level. In this study, we introduce a flexible beta spending function tailored to regulate the type II error rate while employing CP based on a predetermined standardized effect size for futility monitoring (a so-called CP-beta spending function). This function delineates the expenditure of type II error rate across the entirety of the trial. Unlike other existing beta spending functions, the CP-beta spending function seamlessly incorporates beta spending concept into the CP framework, facilitating precise stagewise control of the type II error rate during futility monitoring. In addition, the stopping boundaries derived from the CP-beta spending function can be calculated via integration akin to other traditional beta spending function methods. Furthermore, the proposed CP-beta spending function accommodates various thresholds on the CP-scale at different stages of the trial, ensuring its adaptability across different information time scenarios. These attributes render the CP-beta spending function competitive among other forms of beta spending functions, making it applicable to any trials in group sequential designs with straightforward implementation. Both simulation study and example from an acute ischemic stroke trial demonstrate that the proposed method accurately captures expected power, even when the initially determined sample size does not consider futility stopping, and exhibits a good performance in maintaining overall type I error rates for evident futility.


Subjects
Ischemic Stroke, Research Design, Humans, Sample Size, Computer Simulation, Medical Futility
15.
Vet Med Sci ; 10(3): e1444, 2024 May.
Article in English | MEDLINE | ID: mdl-38581306

ABSTRACT

BACKGROUND: Genome-wide association studies (GWAS) are a useful tool for detecting disease- or quantitative trait-related genetic variations in the veterinary field. For a binary trait, a case/control experiment is designed in GWAS. However, there is limited information on the optimal case/control ratio and sample size in GWAS. OBJECTIVES: In this study, we aimed to detect the effects of case/control ratio and sample size on GWAS using computer simulation under certain assumptions. METHOD: Using the PLINK software, we simulated three different disease scenarios. In scenario 1, we simulated 10 different case/control ratios with an increasing ratio of cases to controls. In scenario 2, we did the reverse, with an increasing ratio of controls to cases. In scenarios 1 and 2, the sample size gradually increased as the case/control ratio changed. In scenario 3, the total sample size was fixed at 2000 to see the real effect of the case/control ratio on the number of disease-related single nucleotide polymorphisms (SNPs). RESULTS: The results showed that the number of disease-related SNPs was highest when the case/control ratio was close to 1:1 in scenarios 1 and 2 and did not change with an increase in sample size. Similarly, the number of disease-related SNPs was highest at a 1:1 case/control ratio in scenario 3, whereas an unbalanced case/control ratio led to the detection of fewer disease-related SNPs. The estimated average power of SNPs was highest at a 1:1 case/control ratio in all scenarios. CONCLUSIONS: These findings lead to the conclusion that an increase in sample size may enhance the statistical power of GWAS when the number of cases is small, and that a 1:1 case/control ratio may be optimal for GWAS. These findings may be valuable not only for the veterinary field but also for human clinical experiments.
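The 1:1 finding mirrors a textbook property of two-group comparisons: with the total sample size fixed, power is maximized near equal allocation. A normal-approximation sketch (the allele frequencies and alpha are illustrative, not PLINK's simulation settings):

```python
import math
from statistics import NormalDist

# Two-proportion power (normal approximation) as the case fraction varies
# while the total sample size stays fixed at N.
nd = NormalDist()

def power_two_prop(p1, p2, n1, n2, alpha=0.05):
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(abs(p1 - p2) / se - z_crit)

N = 2000
for frac_cases in (0.1, 0.3, 0.5, 0.7, 0.9):
    n_cases = int(N * frac_cases)
    print(frac_cases, round(power_two_prop(0.30, 0.25, n_cases, N - n_cases), 3))
```

Power peaks at the 0.5 row and falls off on either side, matching the abstract's observation that unbalanced designs detect fewer SNPs.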


Subjects
Genome-Wide Association Study, Single Nucleotide Polymorphism, Humans, Animals, Genome-Wide Association Study/veterinary, Genome-Wide Association Study/methods, Computer Simulation, Sample Size, Phenotype
16.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38581417

ABSTRACT

Untargeted metabolomics based on liquid chromatography-mass spectrometry technology is quickly gaining widespread application, given its ability to depict the global metabolic pattern in biological samples. However, the data are noisy and plagued by the lack of clear identity of data features measured from samples. Multiple potential matchings exist between data features and known metabolites, while the truth can only be one-to-one matches. Some existing methods attempt to reduce the matching uncertainty, but are far from being able to remove the uncertainty for most features. The existence of the uncertainty causes major difficulty in downstream functional analysis. To address these issues, we develop a novel approach for Bayesian Analysis of Untargeted Metabolomics data (BAUM) to integrate previously separate tasks into a single framework, including matching uncertainty inference, metabolite selection and functional analysis. By incorporating the knowledge graph between variables and using relatively simple assumptions, BAUM can analyze datasets with small sample sizes. By allowing different confidence levels of feature-metabolite matching, the method is applicable to datasets in which feature identities are partially known. Simulation studies demonstrate that, compared with other existing methods, BAUM achieves better accuracy in selecting important metabolites that tend to be functionally consistent and assigning confidence scores to feature-metabolite matches. We analyze a COVID-19 metabolomics dataset and a mouse brain metabolomics dataset using BAUM. Even with a very small sample size of 16 mice per group, BAUM is robust and stable. It finds pathways that conform to existing knowledge, as well as novel pathways that are biologically plausible.


Subjects
Metabolomics, Mice, Animals, Bayes Theorem, Sample Size, Uncertainty, Metabolomics/methods, Computer Simulation
17.
PLoS Biol ; 22(4): e3002456, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38603525

ABSTRACT

A recent article claimed that researchers need not increase the overall sample size for a study that includes both sexes. This Formal Comment points out that that study assumed the two sexes to have the same variance, and explains why this is an unrealistic assumption.


Subjects
Research Design, Male, Female, Humans, Sample Size
18.
Stat Med ; 43(10): 2007-2042, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38634309

ABSTRACT

Quantile regression, known as a robust alternative to linear regression, has been widely used in statistical modeling and inference. In this paper, we propose a penalized weighted convolution-type smoothed method for variable selection and robust parameter estimation of the quantile regression with high dimensional longitudinal data. The proposed method utilizes a twice-differentiable and smoothed loss function instead of the check function in quantile regression without penalty, and can select the important covariates consistently using the efficient gradient-based iterative algorithms when the dimension of covariates is larger than the sample size. Moreover, the proposed method can circumvent the influence of outliers in the response variable and/or the covariates. To incorporate the correlation within each subject and enhance the accuracy of the parameter estimation, a two-step weighted estimation method is also established. Furthermore, we prove the oracle properties of the proposed method under some regularity conditions. Finally, the performance of the proposed method is demonstrated by simulation studies and two real examples.
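One common convolution-smoothed surrogate for the check function uses a Gaussian kernel; whether this particular kernel matches the paper's construction is an assumption, and the tau, h, and u values below are illustrative:

```python
import math

# Check loss rho_tau(u) = u * (tau - 1{u < 0}) and a Gaussian-kernel
# convolution-smoothed version that is twice differentiable in u.
# As the bandwidth h -> 0 the smoothed loss recovers the check loss.

def check_loss(u, tau):
    return u * (tau - (u < 0))

def smoothed_check_loss(u, tau, h):
    phi = math.exp(-0.5 * (u / h) ** 2) / math.sqrt(2 * math.pi)   # N(0,1) pdf
    big_phi_neg = 0.5 * math.erfc(u / (h * math.sqrt(2)))          # Phi(-u/h)
    return h * phi + u * (tau - big_phi_neg)

u, tau = 0.8, 0.5
for h in (1.0, 0.1, 0.001):
    print(h, round(smoothed_check_loss(u, tau, h), 4), round(check_loss(u, tau), 4))
```

The smooth surrogate is what makes the efficient gradient-based iterative algorithms mentioned in the abstract applicable, since the raw check function has no second derivative at zero.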


Subjects
Algorithms, Statistical Models, Humans, Computer Simulation, Linear Models, Sample Size
19.
Stat Med ; 43(10): 1973-1992, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38634314

ABSTRACT

The expected value of the standard power function of a test, computed with respect to a design prior distribution, is often used to evaluate the probability of success of an experiment. However, looking only at the expected value might be reductive. Instead, the whole probability distribution of the power function induced by the design prior can be exploited. In this article we consider one-sided testing for the scale parameter of exponential families and we derive general unifying expressions for cumulative distribution and density functions of the random power. Sample size determination criteria based on alternative summaries of these functions are discussed. The study sheds light on the relevance of the choice of the design prior in order to construct a successful experiment.
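The idea of examining the whole distribution of the random power, rather than only its expectation, can be illustrated by simulation; the one-sided z test, the design prior N(0.5, 0.2^2), and n = 30 are illustrative choices, not the article's exponential-family derivations:

```python
import math
import random
from statistics import NormalDist

# Draw an effect from the design prior, compute the classical power at that
# effect, and repeat: the draws approximate the distribution of the random
# power, of which the usual "probability of success" is just the mean.
nd = NormalDist()
z_crit = nd.inv_cdf(0.95)               # one-sided alpha = 0.05

def power(theta, n):
    return nd.cdf(theta * math.sqrt(n) - z_crit)

random.seed(3)
n = 30
draws = sorted(power(random.gauss(0.5, 0.2), n) for _ in range(5000))
expected = sum(draws) / len(draws)      # expected power / probability of success
print(f"expected power {expected:.2f}; "
      f"P(power < 0.6) = {sum(p < 0.6 for p in draws) / 5000:.2f}")
```

Two designs with the same expected power can carry very different risks of low power, which is the kind of summary the article's cumulative distribution of the random power makes visible.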


Subjects
Bayes Theorem, Humans, Probability, Sample Size
20.
BMC Med Res Methodol ; 24(1): 82, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38580928

ABSTRACT

BACKGROUND: This retrospective analysis aimed to comprehensively review the design and regulatory aspects of bioequivalence trials submitted to the Saudi Food and Drug Authority (SFDA) since 2017. METHODS: This was a retrospective, comprehensive analysis study. The data extracted from the SFDA bioequivalence assessment reports were analyzed to review the overall design and regulatory aspects of the successful bioequivalence trials, explore the impact of the coefficient of variation of within-subject variability (CVw) on some design aspects, and provide an in-depth assessment of bioequivalence trial submissions that were deemed insufficient to demonstrate bioequivalence. RESULTS: A total of 590 bioequivalence trials were included, of which 521 demonstrated bioequivalence (440 single active pharmaceutical ingredients [APIs] and 81 fixed combinations). Most of the successful trials were for cardiovascular drugs (84 of 521 [16.1%]), and the 2 × 2 crossover design was used in 455 (87.3%) trials. The sample size tended to increase with the CVw in trials of single APIs. Biopharmaceutics Classification System Class II and IV drugs accounted for the majority of highly variable drugs (58 of 82 [70.7%]) in the study. Most of the 51 rejected trials were rejected due to concerns related to the study center (n = 21 [41.2%]). CONCLUSION: This comprehensive analysis provides valuable insights into the regulatory and design aspects of bioequivalence trials and can inform future research and assist in identifying opportunities for improvement in conducting bioequivalence trials in Saudi Arabia.
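The reported tendency of sample size to grow with CVw follows the usual bioequivalence sample-size arithmetic; the sketch below is a rough normal-approximation TOST calculation with 0.80-1.25 limits, an assumed GMR of 0.95, and 80% power, not the SFDA's assessment method:

```python
import math
from statistics import NormalDist

# Approximate total sample size for a 2x2 crossover bioequivalence trial
# (TOST, limits 0.80-1.25), using z quantiles instead of the exact t-based
# iteration; all settings are illustrative assumptions.
nd = NormalDist()

def n_total_2x2(cv_w, gmr=0.95, alpha=0.05, power=0.80):
    s_w = math.sqrt(math.log(1 + cv_w ** 2))   # log-scale within-subject SD
    z_a = nd.inv_cdf(1 - alpha)
    z_b = nd.inv_cdf(1 - (1 - power) / 2) if gmr == 1 else nd.inv_cdf(power)
    margin = math.log(1.25) - abs(math.log(gmr))
    return math.ceil(2 * ((z_a + z_b) * s_w / margin) ** 2)

for cv in (0.15, 0.30, 0.45):                  # CVw as a fraction
    print(f"CVw={cv:.0%}: about {n_total_2x2(cv)} subjects")
```

The required n grows roughly with s_w squared, consistent with larger trials for highly variable drugs; an actual submission would use exact t-based power calculations rather than this approximation.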


Subjects
Generic Drugs, Humans, Therapeutic Equivalency, Generic Drugs/therapeutic use, Saudi Arabia, Retrospective Studies, Sample Size